Determination of weights for relaxation recurrent neural networks

Authors

  • Gürsel Serpen
  • David L. Livingston
Abstract

A theorem is proposed which establishes the solutions of a given optimization problem as stable points in the state space of single-layer, relaxation-type recurrent neural networks. The theorem establishes necessary conditions for the network to converge to a solution by proposing bounds on the values of the constraint weight parameters of the network. Convergence performance of the discrete Hopfield network with the proposed bounds on the constraint weight parameters is tested on a set of constraint satisfaction and optimization problems: the Traveling Salesman Problem, the Assignment Problem, the Weighted Matching Problem, the N-Queens Problem, and the Graph Path Search Problem. Simulation and stability analysis results indicate that, as a consequence of the suggested bounds, the set of solutions becomes a subset of the set of stable points in the state space. For the Traveling Salesman, Assignment, and Weighted Matching Problems, the two sets are equal, so the network converges to a solution after every relaxation. Convergence to a solution after each relaxation is not guaranteed for the N-Queens and Graph Path Search Problems, since there the solution set is a proper subset of the stable-point set. Furthermore, the simulation results indicate that the discrete Hopfield network converged mostly to average-quality solutions, as expected from a gradient-descent search algorithm. In conclusion, the suggested bounds on the weight parameters guarantee that the discrete Hopfield network will locate a solution after each relaxation for a class of optimization problems of any size, although the solutions will be of average quality rather than optimal.

Definitions

  • Def. 1. The state space set contains all 2^N N-bit binary vectors for an N-node network.
  • Def. 2. The stable point set contains the binary vectors that are stable points of the recurrent network dynamics for a given optimization problem.
  • Def. 3. The solution set contains those N-bit binary vectors, for an N-node recurrent network, that are solutions of the optimization problem.
  • Def. 4. A relaxation of the recurrent network is the total computational effort expended from an initial state until convergence to a final state.
  • Def. 5. A constraint is hard if violating it necessarily prevents the network from finding a solution.
  • Def. 6. A soft constraint maps a cost measure associated with the quality of a solution, as typically found in optimization problems.
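To make the relaxation dynamics and the hard-constraint weights concrete, here is a minimal sketch, assuming a 0/1-unit discrete Hopfield network and a hypothetical constraint weight A; the encoding below is illustrative, not the authors' exact formulation or their derived bounds. Hard N-Queens constraints (no two queens share a row, column, or diagonal) become inhibitory weights; a soft constraint, e.g. tour length in the Traveling Salesman Problem, would instead contribute graded penalty terms to the same weight matrix.

```python
# Minimal sketch: discrete Hopfield relaxation on the N-Queens Problem.
# The constraint weight A is a hypothetical choice, not a value from the paper.
import numpy as np

def build_weights(n, A=2.0):
    """Inhibitory weights between any two board cells that attack each other."""
    N = n * n
    W = np.zeros((N, N))
    bias = np.full(N, A)  # positive bias so an unattacked cell prefers a queen
    for i in range(N):
        r1, c1 = divmod(i, n)
        for j in range(N):
            if i == j:
                continue
            r2, c2 = divmod(j, n)
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                W[i, j] = -A  # hard-constraint violation: mutual inhibition
    return W, bias

def relax(W, bias, rng, max_sweeps=200):
    """One relaxation: asynchronous updates from a random initial state
    until no unit changes, i.e. until a stable point is reached."""
    N = len(bias)
    s = (rng.random(N) < 0.5).astype(float)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(N):         # asynchronous, random order
            new = 1.0 if W[i] @ s + bias[i] > 0 else 0.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            return s                         # stable point of the dynamics
    return s

rng = np.random.default_rng(0)
n = 6
W, bias = build_weights(n)
board = relax(W, bias, rng).reshape(n, n)
print(board, "queens placed:", int(board.sum()))
```

With this choice a cell turns on exactly when none of its attackers are on, so every solution (n mutually non-attacking queens) is a stable point, but stable points placing fewer than n queens also exist. This mirrors the abstract's finding that, for the N-Queens Problem, the solution set is a proper subset of the stable-point set, so a relaxation is not guaranteed to end in a solution.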


Similar resources

Neuro-Optimizer: A New Artificial Intelligent Optimization Tool and Its Application for Robot Optimal Controller Design

The main objective of this paper is to introduce a new intelligent optimization technique that uses a prediction-correction strategy supported by a recurrent neural network for finding a near-optimal solution of a given objective function. Recently there have been attempts to use artificial neural networks (ANNs) in optimization problems, and some types of ANNs such as the Hopfield network and Boltzm...


Performance Analysis of a New Neural Network for Routing in Mesh Interconnection Networks

Routing is one of the basic parts of a message passing multiprocessor system. The routing procedure has a great impact on the efficiency of a system. Neural algorithms that are currently in use for computer networks require a large number of neurons. If a specific topology of a multiprocessor network is considered, the number of neurons can be reduced. In this paper a new recurrent neural ne...


Robust stability of stochastic fuzzy impulsive recurrent neural networks with time-varying delays

In this paper, global robust stability of stochastic impulsive recurrent neural networks with time-varying delays which are represented by the Takagi-Sugeno (T-S) fuzzy models is considered. A novel Linear Matrix Inequality (LMI)-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of uncertain fuzzy stochastic impulsive recurrent neural...


Efficient Short-Term Electricity Load Forecasting Using Recurrent Neural Networks

Short-term load forecasting (STLF) plays an important role in the economic and reliable operation of power systems. Electric load demand has a complex profile with many multivariable and nonlinear dependencies. In this study, a recurrent neural network (RNN) architecture is presented for STLF. The proposed model is capable of forecasting the next 24-hour load profile. The main feature in this network is ...


Training Simultaneous Recurrent Neural Network with Resilient Propagation for Static Optimization

This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural Network operating in relaxation mode to compute high-quality solutions of static optimization problems. Implementation details related to adapting the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated ...
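For readers unfamiliar with the training algorithm named above, here is a brief sketch of a standard resilient-propagation update from the published literature (the iRprop- variant); the step-size constants are the usual defaults, not values from this paper, and the function name is hypothetical.

```python
# Sketch of a resilient-propagation (Rprop, iRprop- variant) weight update:
# each weight keeps its own step size, grown while the gradient sign is
# stable and shrunk when the sign flips. Constants are common defaults.
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)  # skip update after a sign flip
    w = w - np.sign(grad) * step                 # move by step size, not magnitude
    return w, grad, step

# usage: carry `g_prev` and `step` between iterations
w, step, g_prev = np.zeros(4), np.full(4, 0.1), np.zeros(4)
for _ in range(100):
    g = 2 * (w - 1.0)                            # gradient of a toy quadratic loss
    w, g_prev, step = rprop_step(w, g, g_prev, step)
print(w)                                         # approaches the minimizer at 1.0
```

Because only the gradient's sign is used, the update is insensitive to gradient magnitude, which is one reason the technique is attractive for training relaxation-mode recurrent networks.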


Stability of Simultaneous Recurrent Neural Network Dynamics for Static Optimization

A new trainable, recurrent neural optimization algorithm is studied, with potentially superior capabilities compared to existing neural search algorithms for computing high-quality solutions of static optimization problems in a computationally efficient manner. Specifically, local stability analysis of the dynamics of a relaxation-based recurrent neural network, the Simultaneous Recurrent...



Journal:
  • Neurocomputing

Volume: 34, Issue: -

Pages: -

Publication date: 2000